457 research outputs found
AND and/or OR: Uniform Polynomial-Size Circuits
We investigate the complexity of uniform OR circuits and AND circuits of
polynomial-size and depth. As their name suggests, OR circuits have OR gates as
their computation gates, as well as the usual input, output and constant (0/1)
gates. As is the norm for Boolean circuits, our circuits have a single sink
gate, which implies that an OR circuit computes an OR function on some subset
of its input variables. Determining that subset amounts to solving a number of
reachability questions on a polynomial-size directed graph (which input gates
are connected to the output gate?), taken from a very sparse set of graphs.
However, it is not obvious whether or not this (restricted) reachability
problem can be solved by, say, uniform AC^0 circuits (constant depth,
polynomial-size, AND, OR, NOT gates). This is one reason why characterizing the
power of these simple-looking circuits in terms of uniform classes turns out to
be intriguing. Another is that the model itself seems particularly natural and
worthy of study.
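The reachability question above is concrete enough to sketch in code: given the circuit wiring as a directed graph, a reverse breadth-first search from the output gate recovers exactly the input variables the OR is taken over. A minimal sketch, with an assumed encoding (the gate numbering and dictionary wiring are illustrative, not from the paper):

```python
from collections import deque

def or_circuit_inputs(n_inputs, edges, output):
    """Which input gates reach the output gate of an OR circuit?

    Gates 0..n_inputs-1 are inputs; `edges` maps each gate to the list of
    gates it feeds.  The circuit then computes the OR of exactly the
    inputs from which the output is reachable.
    """
    # Build the reversed graph, then BFS backwards from the output gate.
    reverse = {}
    for g, outs in edges.items():
        for h in outs:
            reverse.setdefault(h, []).append(g)
    seen, queue = {output}, deque([output])
    while queue:
        g = queue.popleft()
        for p in reverse.get(g, []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return sorted(g for g in seen if g < n_inputs)
```

Running such a search of course takes more than constant depth; the question raised above is whether the very restricted graphs arising from uniform OR circuits admit something as weak as uniform AC^0.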
Our goal is the systematic characterization of uniform polynomial-size OR
circuits, and AND circuits, in terms of known uniform machine-based complexity
classes. In particular, we consider the languages reducible to such uniform
families of OR circuits, and AND circuits, under a variety of reduction types.
We give upper and lower bounds on the computational power of these language
classes. We find that these complexity classes are closely related to tallyNL,
the set of unary languages within NL, and to sets reducible to tallyNL.
Specifically, for a variety of types of reductions (many-one, conjunctive truth
table, disjunctive truth table, truth table, Turing) we give characterizations
of languages reducible to OR circuit classes in terms of languages reducible to
tallyNL classes. Then, some of these OR classes are shown to coincide, and some
are proven to be distinct. We give analogous results for AND circuits. Finally,
for many of our OR circuit classes, and analogous AND circuit classes, we prove
whether or not the two classes coincide, although we leave one such inclusion
open.
Comment: In Proceedings MCU 2013, arXiv:1309.104
After The Crash-The Ground and the Sublime in the Works of Hito Steyerl and Trevor Paglen
An analysis of how fact, fiction, and perspective are affected by contemporary digital economies, examined through the work of the artists Hito Steyerl and Trevor Paglen.
Detecting large-scale structure in the era of petabyte/gigaparsec Astronomy
In this thesis, we present a study of the identification of large-scale structure in optical astronomical surveys. This encompasses the detection of large connected structures of galaxies in spectroscopic datasets and of galaxy clusters in deep photometric surveys.
Beginning with a survey featuring full 3D galaxy data, in chapter 2 we present a method to identify filamentary structure after accounting for the line-of-sight velocity distortions characteristic of the virialised systems we search for. We compare data from a real galaxy survey to a series of realistic mocks. Despite broad similarities between the two, we find the models do not reproduce the largest observed structures. To evaluate the prospects for such exploration in a multi-band survey lacking spectroscopy, we simulate the effects of photometric redshift uncertainties on galaxy redshifts. Our findings provide limits on the accuracy of photometric redshift estimators required to recover the same diverse range of structures detected in the original spectroscopic survey.
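The photometric-redshift experiment can be sketched as follows: perturb each spectroscopic redshift with Gaussian noise whose width grows as (1 + z). This is a standard parameterisation assumed here for illustration; the thesis's actual error model may differ:

```python
import random

random.seed(42)

def add_photoz_scatter(z_spec, sigma0):
    """Perturb spectroscopic redshifts with Gaussian photo-z scatter.

    Assumes the common parameterisation sigma_z = sigma0 * (1 + z);
    negative perturbed redshifts are clipped to zero.
    """
    return [max(0.0, z + random.gauss(0.0, sigma0 * (1.0 + z)))
            for z in z_spec]

# A toy catalogue: 1000 redshifts spread over 0.05 <= z <= 0.5.
z_spec = [0.05 + 0.45 * i / 999 for i in range(1000)]
z_photo = add_photoz_scatter(z_spec, sigma0=0.03)
```

Re-running a structure finder on `z_photo` for a grid of `sigma0` values is then one way to bound the photo-z accuracy needed to recover the spectroscopic detections.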
As an alternative means of exploiting the deep multi-band photometric data common to wide-area observational campaigns, in chapter 3 we present a red sequence-based algorithm to detect galaxy clusters with Voronoi diagrams. This algorithm makes no prior assumptions about cluster properties other than the similarity in colour of their members, and an enhanced projected surface density. Testing the algorithm with mock galaxy survey data reveals a detection performance equalling or exceeding that of alternative detection algorithms.
Chapter 4 describes the application of this algorithm to a survey with deep SDSS photometry. The scientific exploitation of cluster detections from this survey is ongoing, but the work presented here shows: agreement with the red sequence slope evolution derived from semi-analytic galaxy models, evidence that stellar age is not responsible for the sequence slope, and a well-defined colour-colour track of potential use in photometric cluster redshift estimation. We detail improvements made to the cluster algorithm in chapter 5. Through a series of case studies we verify that our approach successfully identifies galaxy clusters in a diverse range of surveys, from volumes spanning … to deep near-IR detections at …. Based on our findings, we expect the Pan-STARRS large-area survey to identify over … clusters and groups.
In chapter 6, we explore the characteristics of randomly-distributed noise in Voronoi diagrams. We verify that the model traditionally used to describe the distribution of Voronoi cell areas in Poisson data fails to describe the frequency of high-density random cells. Because high-density cells resemble those expected from a population of galaxy cluster members, we use a large dataset generated in this study to propose an alternative model that better estimates the frequency of their areas. This new model may in future be used to improve Voronoi-based recovery of clustered data in a diverse range of applications, both astronomical and otherwise.
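The "model traditionally used" for normalised Poisson-Voronoi cell areas is commonly taken to be a gamma distribution (Kiang's conjecture, with later numerical fits giving a 2-D shape parameter near 3.61). A stdlib-only sketch of that baseline, for orientation; the thesis's alternative model is not reproduced here:

```python
import math

def kiang_pdf(x, a=3.61):
    """Gamma model for normalised 2-D Voronoi cell areas (mean area 1).

    The shape parameter a ~ 3.61 is a commonly quoted numerical fit;
    Kiang's original conjecture was a = 4.  Assumed here for illustration.
    """
    return (a ** a / math.gamma(a)) * x ** (a - 1) * math.exp(-a * x)
```

The finding above is that a baseline of this form fails for the smallest (highest-density) cells, which are precisely the ones that mimic cluster members.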
Uniformity is weaker than semi-uniformity for some membrane systems
We investigate computing models that are presented as families of finite
computing devices with a uniformity condition on the entire family. Examples of
such models include Boolean circuits, membrane systems, DNA computers, chemical
reaction networks and tile assembly systems, among many others.
However, in such models there are actually two distinct kinds of uniformity
condition. The first is the most common and well-understood, where each input
length is mapped to a single computing device (e.g. a Boolean circuit) that
computes on the finite set of inputs of that length. The second, called
semi-uniformity, is where each input is mapped to a computing device for that
input (e.g. a circuit with the input encoded as constants). The former notion
is well-known and used in Boolean circuit complexity, while the latter notion
is frequently found in the literature on nature-inspired computation from the past
20 years or so.
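The distinction can be caricatured in a few lines of code: a uniform constructor sees only the input length n, while a semi-uniform constructor sees the whole input and may bake it into the device as constants. A toy sketch (a real definition also bounds the constructor's resources, e.g. to logspace or AC^0, which is where the distinction bites; that bound is ignored here):

```python
def uniform_family(n):
    """Uniform: one device per input length n; it reads its input at run time."""
    def device(bits):                 # a toy 'device' computing parity of n bits
        assert len(bits) == n
        return sum(bits) % 2
    return device

def semi_uniform_family(bits):
    """Semi-uniform: one device per input; the input is baked in as constants."""
    answer = sum(bits) % 2            # the constructor may precompute on the input
    def device():
        return answer
    return device
```

With unbounded constructors the two notions trivially coincide; the results below show that for certain membrane systems, resource-bounded uniform constructors are strictly weaker than semi-uniform ones.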
Are these two notions distinct? For many models it has been found that these
notions are in fact the same, in the sense that the choice of uniformity or
semi-uniformity leads to characterisations of the same complexity classes. In
other related work, we showed that these notions are actually distinct for
certain classes of Boolean circuits. Here, we give analogous results for
membrane systems by showing that certain classes of uniform membrane systems
are strictly weaker than the analogous semi-uniform classes. This solves a
known open problem in the theory of membrane systems. We then go on to present
results towards characterising the power of these semi-uniform and uniform
membrane models in terms of NL and languages reducible to the unary languages
in NL, respectively.
Comment: 28 pages, 1 figure
On acceptance conditions for membrane systems: characterisations of L and NL
In this paper we investigate the effect of various acceptance conditions on
recogniser membrane systems without dissolution. We demonstrate that two
particular acceptance conditions (one easier to program, the other easier to
prove correct) both characterise the same complexity class, NL. We also
find that by restricting the acceptance conditions we obtain a characterisation
of L. We obtain these results by investigating the connectivity properties of
dependency graphs that model membrane system computations.
First Steps Towards Linking Membrane Depth and the Polynomial Hierarchy
In this paper we take the first steps in studying possible connections between
non-elementary division with limited membrane depth and the levels of the Polynomial
Hierarchy. We present a uniform family with a membrane structure of depth d + 1 that
solves a problem complete for level d of the Polynomial Hierarchy.
Computational Processes and Incompleteness
We introduce a formal definition of Wolfram's notion of computational process
based on cellular automata, a physics-like model of computation. There is a
natural classification of these processes into decidable, intermediate and
complete. It is shown that in the context of standard finite injury priority
arguments one cannot establish the existence of an intermediate computational
process.
The Pagoda Sequence: a Ramble through Linear Complexity, Number Walls, D0L Sequences, Finite State Automata, and Aperiodic Tilings
We review the concept of the number wall as an alternative to the traditional
linear complexity profile (LCP), and sketch the relationship to other topics
such as linear feedback shift-register (LFSR) and context-free Lindenmayer
(D0L) sequences. A remarkable ternary analogue of the Thue-Morse sequence is
introduced having deficiency 2 modulo 3, and this property is verified via the
re-interpretation of the number wall as an aperiodic plane tiling.
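For orientation, the classical binary Thue-Morse sequence, of which the paper's ternary sequence is an analogue, is generated by the parity of binary digit sums; the ternary construction itself is not reproduced here:

```python
def thue_morse(n):
    """First n terms of the binary Thue-Morse sequence: t(k) = popcount(k) mod 2."""
    return [bin(k).count("1") % 2 for k in range(n)]

# The first eight terms: 0 1 1 0 1 0 0 1.
print(thue_morse(8))
```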
A Concrete View of Rule 110 Computation
Rule 110 is a cellular automaton that performs repeated simultaneous updates
of an infinite row of binary values. The values are updated in the following
way: 0s are changed to 1s at all positions where the value to the right is a 1,
while 1s are changed to 0s at all positions where the values to the left and
right are both 1. Though trivial to define, the behavior exhibited by Rule 110
is surprisingly intricate, and in (Cook, 2004) we showed that it is capable of
emulating the activity of a Turing machine by encoding the Turing machine and
its tape into a repeating left pattern, a central pattern, and a repeating
right pattern, which Rule 110 then acts on. In this paper we provide an
explicit compiler for converting a Turing machine into a Rule 110 initial
state, and we present a general approach for proving that such constructions
will work as intended. The simulation was originally assumed to require
exponential time, but surprising results of Neary and Woods (2006) have shown
that in fact, only polynomial time is required. We use the methods of Neary and
Woods to exhibit a direct simulation of a Turing machine by a tag system in
polynomial time.
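The update rule described above translates directly into code. A minimal sketch of one synchronous Rule 110 step on a finite row with fixed-zero boundaries (the constructions in the paper instead act on an infinite row built from repeating left and right patterns):

```python
RULE = 110  # rule number; its binary digits give the output for each neighbourhood

# Look-up table: neighbourhood (left, centre, right) -> new cell value.
TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """One synchronous Rule 110 update of a finite row (zeros beyond the edges)."""
    padded = [0] + list(cells) + [0]
    return [TABLE[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

# A single 1 at the right edge grows leftward over repeated updates.
row = [0, 0, 0, 0, 0, 0, 0, 1]
for _ in range(4):
    row = step(row)
```

Note that the table reproduces the prose rule: a 0 becomes 1 exactly when its right neighbour is 1, and a 1 becomes 0 exactly when both neighbours are 1.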
On the boundaries of solvability and unsolvability in tag systems. Theoretical and Experimental Results
Several older and more recent results on the boundaries of solvability and
unsolvability in tag systems are surveyed. Emphasis will be put on the
significance of computer experiments in research on very small tag systems.
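For illustration, a generic m-tag system of the kind surveyed here takes only a few lines to simulate: read the head symbol, append its production, delete the first m symbols, and halt when the word becomes too short or no rule applies. The rules used in the test are an illustrative toy, not a system from the survey:

```python
def run_tag_system(word, rules, m=2, max_steps=1000):
    """Simulate an m-tag system.

    Each step appends the production for the first symbol, then deletes the
    first m symbols.  Halts when the word is shorter than m, when the head
    symbol has no rule, or after max_steps steps (tag systems need not halt).
    """
    word = list(word)
    for _ in range(max_steps):
        if len(word) < m or word[0] not in rules:
            break
        word.extend(rules[word[0]])
        del word[:m]
    return "".join(word)
```

For example, with the 2-tag rules a -> bc, b -> a, c -> aaa, the word 'aaa' becomes 'abc' after one step and 'cbc' after two.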